This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[BUGFIX] Fix test_zero_sized_dim save/restore of np_shape state #20365

Merged
merged 2 commits into from
Jun 25, 2021

Conversation

DickJC123
Contributor

Description

Tensor shapes can be interpreted in one of two modes, depending on whether "numpy shape semantics" have been enabled. There are unittests that exercise shape handling in both modes, but no unittest should make assumptions about the prevailing mode on entry, nor permanently alter the mode for tests that follow (regardless of whether the test passes or fails).

This PR fixes test_operator.py::test_zero_sized_dim, which was leaving numpy shape semantics enabled for follow-on tests. Running the test was found to cause failures in a subsequent test, test_sparse_ndarray.py::test_sparse_nd_pickle, among others, as reported in issue #20337. This PR also fixes test_operator.py::test_boolean_mask and test_thread_local.py::test_np_global_shape, which did not properly save and restore the pre-test shape interpretation mode.

When running pytest with xdist and multiple workers, I'm not sure whether the same set of tests runs on the same workers each time. If the assignment of tests to workers is non-deterministic, then failures of susceptible tests may be non-deterministic as well. In that case, this PR may also fix the non-deterministic issue #19915.

Regarding implementation, I chose not to use the more elegant np_shape() or use_np_shape() decorators in fixing these tests. Instead, I kept the use of set_np_shape() in order to retain some direct testing of this function from the unittests.

Checklist

Essentials

  • [x] PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc)
  • [x] Changes are complete (i.e. I finished coding on this PR)
  • [x] All changes have test coverage
  • [x] Code is well-documented

Changes

  • test_operator.py::test_zero_sized_dim: save and restore the np_shape state around the test body
  • test_operator.py::test_boolean_mask and test_thread_local.py::test_np_global_shape: properly save and restore the pre-test shape interpretation mode

@DickJC123 DickJC123 requested a review from ptrendx June 19, 2021 03:53
@mxnet-bot

Hey @DickJC123, thanks for submitting the PR.
All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands:

  • To trigger all jobs: @mxnet-bot run ci [all]
  • To trigger specific jobs: @mxnet-bot run ci [job1, job2]

CI supported jobs: [website, unix-cpu, clang, miscellaneous, windows-cpu, edge, unix-gpu, sanity, windows-gpu, centos-gpu, centos-cpu]


Note:
Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
All CI tests must pass before the PR can be merged.

Member

@ptrendx ptrendx left a comment


LGTM

@leezu
Contributor

leezu commented Jun 20, 2021

@mxnet-bot run ci [all]

@mseth10 mseth10 added the pr-awaiting-testing PR is reviewed and waiting CI build and test label Jun 24, 2021
@mseth10 mseth10 added pr-work-in-progress PR is still work in progress pr-awaiting-testing PR is reviewed and waiting CI build and test pr-awaiting-review PR is waiting for code review and removed pr-awaiting-testing PR is reviewed and waiting CI build and test pr-work-in-progress PR is still work in progress labels Jun 25, 2021
@ptrendx ptrendx merged commit dc69b04 into apache:master Jun 25, 2021
Labels
pr-awaiting-review PR is waiting for code review
5 participants